Search results: 1,615 articles found (query time 15 ms)
51.
Traditional machine learning algorithms have been widely applied to mineral prospectivity prediction, but they still lack effective ways to handle the high-dimensional sparsity and imbalanced small samples characteristic of geological big data; designing machine learning algorithms suited to these characteristics is a new problem that intelligent mineral prediction urgently needs to solve. Taking lead-zinc polymetallic mineral prediction in the Haobugao area of Inner Mongolia as an example, this paper proposes a semi-supervised co-training mineral prediction model for geological big data. First, the prospecting information and geochemical anomaly information of the study area were quantitatively analyzed, and nine prospecting factors were extracted: fault structures, Permian strata, Yanshanian intrusive rocks, stratum-intrusion contact zones, wall-rock alteration, and Pb, Zn, Sn and Cu geochemical anomalies. Next, recursive feature elimination was used to select the optimal combination of prospecting factors; the eight-factor combination excluding the Sn anomaly was selected as optimal. Finally, support vector machine and random forest algorithms were used as base classifiers for semi-supervised co-training mineral prediction, and a mineralization probability map was produced. ROC-curve and prediction-rate curve analyses show that the semi-supervised co-training model achieves a higher AUC and higher prediction efficiency than the random forest and support vector machine models alone. The results also offer a new approach to intelligent mineral prediction in a big-data environment.
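The co-training idea above (base learners that exchange confident pseudo-labels on unlabeled data) can be sketched in a few lines of pure Python. This is a minimal, simplified single-view version: a toy nearest-centroid learner stands in for the SVM and random forest base classifiers, and the feature vectors are synthetic, not the Haobugao prospecting factors.

```python
import math

class CentroidClassifier:
    """Toy stand-in for an SVM/RF base learner: nearest class centroid,
    with the margin between the two closest centroids as confidence."""
    def fit(self, X, y):
        self.centroids = {}
        for c in set(y):
            pts = [x for x, lab in zip(X, y) if lab == c]
            self.centroids[c] = [sum(col) / len(pts) for col in zip(*pts)]
        return self

    def predict_conf(self, x):
        d = sorted((math.dist(x, c), lab) for lab, c in self.centroids.items())
        label = d[0][1]
        conf = d[1][0] - d[0][0] if len(d) > 1 else float("inf")
        return label, conf

def co_train(X_labeled, y_labeled, X_unlabeled, rounds=5):
    """Each round, train two learners on the current labeled pool; the
    learner that is most confident about some unlabeled sample gets to
    pseudo-label it, and that sample joins the shared pool."""
    X, y, U = list(X_labeled), list(y_labeled), list(X_unlabeled)
    for _ in range(rounds):
        if not U:
            break
        learners = [CentroidClassifier().fit(X, y) for _ in range(2)]
        # (confidence, index, label) over every learner/unlabeled-sample pair
        candidates = [(conf, i, label)
                      for i, u in enumerate(U)
                      for label, conf in (clf.predict_conf(u) for clf in learners)]
        conf, i, label = max(candidates)
        X.append(U.pop(i))
        y.append(label)
    return CentroidClassifier().fit(X, y)
```

In the paper's setting the two learners are different algorithms (SVM and random forest), which is what makes their mutual pseudo-labeling informative; here they are identical only to keep the sketch self-contained.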
52.
Automatic lithology identification and classification based on deep learning of rock images   Cited by: 8 (self-citations: 3, citations by others: 5)
张野  李明超  韩帅 《岩石学报》2018,34(2):333-342
The identification and classification of rock lithology is extremely important for geological analysis, and building an automatic classification model with machine learning offers a new route. Based on the Inception-v3 deep convolutional neural network, a deep transfer-learning model for rock image analysis was established, and transfer learning was used to achieve automatic lithology identification and classification. The method was applied to 173 granite images, 152 phyllite images and 246 breccia images; a deep transfer model was trained on this image set and then checked against rock images from both the training set and the test set. For the training set, three images per rock group were tested: all three lithologies were classified correctly, with classification probabilities above 90%, demonstrating good robustness. For the test set, nine images per rock group were used: all three lithologies were again classified correctly, and all phyllite images scored above 90%, but two granite images and one breccia image had classification probabilities below 70%, lower than the other images; the likely reason is that the training set contained few images of the same pattern, reducing the model's generalization ability. To improve accuracy, the low-scoring images were cropped and three of the crops were added to the training set for retraining, thereby adding training samples with the same pattern as the test images; in the retrained model, the three images were re-tested and all reached probabilities above 85%, showing that the model learns well given sufficient data. Compared with traditional machine learning, the proposed rock-image deep learning method has the following advantages: first, the model extracts object features by scanning image pixels, so features of the objects to be classified need not be extracted by hand; second, it places low demands on image resolution, imaging distance and illumination; third, with a suitable training set it achieves good classification results with good robustness and generalization.
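The per-image "classification probability" values quoted above (e.g. above 90% for granite) are the softmax outputs of the network's final layer. A minimal sketch of that conversion from raw logits to class probabilities (not the Inception-v3 implementation itself; the logit values below are hypothetical):

```python
import math

def softmax(logits):
    """Convert a network's final-layer logits into class probabilities.
    The max is subtracted first for numerical stability."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. three output units for granite, phyllite, breccia (made-up logits)
probs = softmax([4.0, 0.5, 0.1])
```

A confident prediction corresponds to one logit dominating the others; the low-probability (<70%) test images in the abstract are cases where the logits are close together.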
53.
刘艳鹏  朱立新  周永章 《岩石学报》2018,34(11):3217-3224
Big-data artificial-intelligence geology is just getting started, and geological research based on big-data intelligent algorithms is a meaningful exploratory experiment. Using big data and machine learning to address mineral prediction helps overcome the difficulty of not being able to consider all geological variables, and allows the reliability of current models to be assessed against existing data. The surface distribution of elements is controlled mainly by protolith composition, mineralization and surface processes; it carries information indicating orebody emplacement, i.e. the surface response to an orebody emplaced in subsurface space, provided that response has not been erased by surface processes. Previous geochemical exploration work only identified anomalies and failed to recognize these mineralization signatures in the surface response of orebodies. Taking the Zhaojikou lead-zinc deposit in Anhui Province as an example, this paper uses a convolutional neural network to progressively mine the coupling between the surface distribution of Pb and the subsurface emplacement space of orebodies. After 1000 training epochs, a convolutional neural network model with an accuracy of 0.93 and a loss of 0.28 was obtained. This model captures the surface elemental response to orebody emplacement at depth and can be used for mineral resource prediction. Applying the model to unexplored areas indicates a high probability of an undiscovered orebody in area No. 53.
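The core operation a convolutional network applies to a gridded element-distribution map is sliding a small kernel over the grid and summing element-wise products, which is what lets it pick up local spatial patterns in the Pb surface. A minimal pure-Python sketch of that operation on a synthetic grid (not the paper's network or data):

```python
def conv2d(grid, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation, as in most
    deep-learning frameworks): slide the kernel over the grid and sum
    element-wise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(grid) - kh + 1):
        row = []
        for j in range(len(grid[0]) - kw + 1):
            row.append(sum(grid[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```

A kernel acts as a pattern detector: the diagonal kernel in the test below responds strongly wherever the input grid contains a diagonal streak of high values.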
54.
The water cycle comprises the natural water cycle and the water cycle of the social-economic system. The concept of virtual water provides a new method and means for studying the social water cycle. Based on the theory of water circulation in social-economic systems, this paper uses input-output analysis to quantitatively describe the social water cycle pathways of the Tarim River Basin and to analyze the key issues in water resources management for the basin's sustainable development. The results show that the main virtual-water-exporting sectors in the Tarim River Basin are agriculture, petroleum, natural gas, and the food industry. Agricultural water accounts for more than 98% of total water consumption, most of which is transferred to the food and textile industries, and the food industry exports water drawn from the agricultural sector. Shandong Province is the largest destination of virtual water transfers from the Tarim River Basin, while the main virtual-water-importing sector of the basin is the metallurgical industry. Finally, in view of the problems arising from inter-industry and inter-regional social water circulation, the paper puts forward ways and strategies for regulating agricultural and industrial water use in the Tarim River Basin.
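Input-output analysis traces virtual water through the economy via the Leontief inverse: total embodied water intensity is the direct water-use vector multiplied by (I - A)^-1, where A is the matrix of technical coefficients. A two-sector numeric sketch with made-up coefficients (not the Tarim River Basin tables):

```python
def leontief_inverse_2x2(A):
    """(I - A)^-1 for a 2x2 technical-coefficient matrix A,
    using the closed-form 2x2 matrix inverse."""
    a, b = A[0]
    c, d = A[1]
    m00, m01, m10, m11 = 1 - a, -b, -c, 1 - d
    det = m00 * m11 - m01 * m10
    return [[ m11 / det, -m01 / det],
            [-m10 / det,  m00 / det]]

def embodied_water(direct_water, A, final_demand):
    """Virtual water embodied in each sector's final demand:
    total intensity = w_direct^T (I - A)^-1, then scale by final demand."""
    L = leontief_inverse_2x2(A)
    intensity = [sum(direct_water[k] * L[k][j] for k in range(2))
                 for j in range(2)]
    return [intensity[j] * final_demand[j] for j in range(2)]
```

With a water-intensive sector 0 ("agriculture") feeding sector 1 ("food industry"), the embodied-water result shows sector 1 exporting far more water than it uses directly, which is exactly the virtual-water-transfer effect the abstract describes.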
55.
Using imagery of fog events captured by radar stations along the Yangtze River waterway within Chongqing, fog-image recognition models were built with four machine learning algorithms: k-nearest neighbors, support vector machine, BP neural network, and random forest. The models were trained on fog-free cases and five classes of fog cases, and their recognition accuracy was verified. The results show that machine learning can effectively identify fog images, with the random forest algorithm outperforming the other three. For fog-free conditions with visibility above 1500 m, the model's recognition accuracy was 100%; for light fog (visibility 1000-1500 m) and very dense fog (visibility below 50 m), accuracy exceeded 90%; for fog, heavy fog and dense fog (visibility 50-1000 m), accuracy exceeded 70%.
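Of the four algorithms compared, k-nearest neighbors is the simplest to show end to end: classify an image by majority vote among its nearest neighbors in feature space. A minimal sketch, where the 2-D feature vectors stand in for image descriptors (the paper's actual features and fog classes are not reproduced here):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples,
    using Euclidean distance in feature space."""
    neighbors = sorted(zip(train_X, train_y),
                       key=lambda pair: math.dist(pair[0], x))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

Random forest won in the paper's comparison; k-NN's weakness in this setting is that every prediction requires comparing against all training images, and all features are weighted equally.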
56.
An unsupervised machine-learning workflow is proposed for estimating fractional landscape soil and vegetation components from remotely sensed hyperspectral imagery. The workflow is applied to EO-1 Hyperion satellite imagery collected near Ibirací, Minas Gerais, Brazil, and includes subset feature selection, learning, and estimation algorithms. Network training with landscape feature class realizations provides a hypersurface from which to estimate mixtures of soil (e.g. 0.5 exceedance for pixels: 75% clay-rich Nitisols, 15% iron-rich Latosols, and 1% quartz-rich Arenosols) and vegetation (e.g. 0.5 exceedance for pixels: 4% Aspen-like trees, 7% Blackberry-like trees, 0% live grass, and 2% dead grass). The process correctly maps forests and iron-rich Latosols as coincident with existing drainages, and correctly classifies the clay-rich Nitisols and grasses on the intervening hills. These classifications are corroborated independently, both visually (Google Earth) and quantitatively (random soil samples and crossplots of field spectra). Remaining mapping challenges are the underestimation of forest fractions and overestimation of soil fractions where steep valley shadows exist, and the under-representation of classified grass in some dry areas of the Hyperion image. These preliminary results provide impetus for future hyperspectral studies involving airborne and satellite sensors with higher signal-to-noise ratios and smaller footprints.
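In its simplest linear-mixing form, fractional abundance estimation reduces to projecting a pixel spectrum onto the line between two endmember spectra and reading off the mixing fraction. This is only a two-endmember sketch with made-up spectra, not the workflow's multi-class network estimator:

```python
def unmix_two_endmembers(pixel, e1, e2):
    """Least-squares fraction f of endmember e1 in a pixel assumed to be
    a linear mix f*e1 + (1-f)*e2, clipped to the physical range [0, 1]."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    return min(1.0, max(0.0, num / den))
```

When the pixel really is a linear mix, the recovered fraction is exact; real hyperspectral pixels deviate from this model (nonlinearity, shadow, sensor noise), which is why the workflow needs a learned hypersurface rather than a closed-form projection.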
57.
Building damage maps produced after disasters help manage rescue operations. Researchers have used Light Detection and Ranging (LiDAR) data to extract building damage maps. To produce such maps from LiDAR data rapidly, it is necessary to understand the effectiveness of features and classifiers; however, there has been no comprehensive study of their performance in identifying damaged areas. In this study, the effectiveness of three texture-extraction methods and three fuzzy systems for producing building damage maps was investigated. In the proposed method, a pre-processing stage first applies the essential processing to post-event LiDAR data. Second, textural features are extracted from the pre-processed LiDAR data. Third, fuzzy inference systems are generated to relate the extracted textural features of buildings to their damage extent. The proposed method was tested on three areas affected by the 2010 Haiti earthquake, yielding three building damage maps with overall accuracies of 75.0%, 78.1% and 61.4%. Based on these outcomes, the fuzzy inference systems were stronger than random forest, bagging, boosting and support vector machine classifiers at detecting damaged buildings.
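A fuzzy inference system maps input features to an output through membership functions and rules rather than a crisp decision boundary. A minimal two-rule, Mamdani-style sketch; the "roughness" feature, rule centroids, and membership shapes are all hypothetical, standing in for the paper's textural features and damage grades:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def damage_degree(roughness):
    """Two-rule inference with weighted-average defuzzification:
    IF roughness is low  THEN damage is low  (output centroid 0.2)
    IF roughness is high THEN damage is high (output centroid 0.9)"""
    low = tri(roughness, 0.0, 0.0, 0.5)    # right shoulder of "low"
    high = tri(roughness, 0.3, 1.0, 1.0)   # left shoulder of "high"
    if low + high == 0.0:
        return 0.0
    return (low * 0.2 + high * 0.9) / (low + high)
```

Intermediate roughness values fire both rules partially and defuzzify to an intermediate damage degree, which is what lets a fuzzy system express graded damage rather than a binary damaged/undamaged call.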
58.
SensePlace3 (SP3) is a geovisual analytics framework and web application that supports overview + detail analysis of social media, focusing on extracting meaningful information from the Twitterverse. SP3 leverages social media related to crisis events. It differs from most existing systems by enabling an analyst to obtain place-relevant information from tweets that have implicit as well as explicit geography. Specifically, SP3 not only utilizes the explicit geography of geolocated tweets but also analyzes implicit geography by recognizing and geolocating references both in tweet text, which indicate locations tweeted about, and in Twitter profiles, which indicate locations affiliated with users. Key features of SP3 reported here include flexible search and filtering capabilities to support information foraging; an ingest, processing, and indexing pipeline that provides near-real-time access to big streaming data; and a novel strategy for implementing a web-based multi-view visual interface with dynamic linking of entities across views. The SP3 system architecture was designed to support crisis management applications, but its flexible design makes it easily adaptable to other domains. We also report on a user study that provided input to the SP3 interface design and suggests next steps for effective spatiotemporal analytics using social media sources.
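The explicit-versus-implicit geography distinction can be illustrated in miniature: use a tweet's GPS coordinates when present, otherwise fall back to matching place names in the text against a gazetteer. The tiny gazetteer, field names, and matching logic below are hypothetical illustrations, not SP3's actual pipeline (which handles named-entity recognition, disambiguation, and profile locations):

```python
GAZETTEER = {  # toy gazetteer: place name -> (lat, lon)
    "harrisburg": (40.27, -76.88),
    "houston": (29.76, -95.37),
}

def geolocate(tweet):
    """Prefer the tweet's explicit coordinates; otherwise resolve the first
    gazetteer place name found in the text (implicit geography)."""
    if tweet.get("coords") is not None:
        return tweet["coords"], "explicit"
    for token in tweet.get("text", "").lower().replace(",", " ").split():
        if token in GAZETTEER:
            return GAZETTEER[token], "implicit"
    return None, "unresolved"
```

Only a small minority of tweets carry explicit coordinates, which is why systems that can exploit implicit geography recover far more place-relevant information.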
59.
Rapid flood mapping is critical for local authorities and emergency responders to identify areas in need of immediate attention. However, traditional data-collection practices such as remote sensing and field surveying often fail to offer timely information during or right after a flooding event. Social media such as Twitter have emerged as a new data source for disaster management and flood mapping. Using the 2015 South Carolina floods as the study case, this paper introduces a novel approach to mapping the flood in near real time by leveraging Twitter data in geospatial processes. Specifically, we first analyzed the spatiotemporal patterns of flood-related tweets using quantitative methods to better understand how Twitter activity relates to flood phenomena. Then, a kernel-based flood mapping model was developed to map flooding possibility for the study area based on water-height points derived from tweets and stream gauges, with the identified patterns of Twitter activity used to weight the flood model parameters. The feasibility and accuracy of the model were evaluated by comparing its output with official inundation maps. Results show that the proposed approach can provide a consistent and comparable estimation of the flood situation in near real time, which is essential for improving situational awareness during a flooding event to support decision-making.
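A kernel-based surface of the kind described turns scattered point observations into a continuous map: each observation contributes a Gaussian bump whose influence decays with distance. A minimal sketch; the observation points, weights, and bandwidth below are synthetic, not the South Carolina water-height data or the paper's calibrated parameters:

```python
import math

def flood_likelihood(grid_point, obs_points, weights, bandwidth):
    """Weighted Gaussian kernel surface: sum the contributions of every
    observation point (tweet- or gauge-derived), each decaying with the
    squared distance to the grid point."""
    return sum(w * math.exp(-math.dist(grid_point, p) ** 2
                            / (2 * bandwidth ** 2))
               for p, w in zip(obs_points, weights))
```

Evaluating this function over a regular grid yields the flooding-possibility surface; the bandwidth controls how far each water-height observation's influence spreads, and the weights are where tweet-activity patterns can enter the model.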
60.
For many researchers, government agencies, and emergency responders, access to geospatial data on US electric power infrastructure is invaluable for analysis, planning, and disaster recovery. Historically, however, access to high-quality geospatial energy data has been limited to a few agencies because of commercial license restrictions, and the resources that are widely accessible have been of poor quality, particularly with respect to reliability. Recent efforts to develop a highly reliable and publicly accessible alternative to the existing datasets were met with numerous challenges, not the least of which was filling the gaps in power transmission line voltage ratings. To address the line voltage rating problem, we developed and tested a basic methodology that fuses knowledge and techniques from the power systems, geography, and machine learning domains. Specifically, we identified predictors of nominal voltage that could be extracted from aerial imagery and developed a tree-based classifier to classify nominal line voltage ratings. Overall, we found that line support height, support span, and conductor spacing are the best predictors of voltage ratings, and that the classifier built with these predictors had reliable predictive accuracy (within one voltage class for four of the five classes sampled). We applied our approach to a study area in Minnesota.
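The building block of any tree-based classifier is a single split: pick the threshold on one feature (here, support height, one of the predictors the study found effective) that best separates the classes. A minimal decision-stump learner on synthetic data; the heights and voltage labels are illustrative, not the study's measurements or class boundaries:

```python
def fit_stump(values, labels):
    """Exhaustively search thresholds on a single feature; predict one label
    at or below the threshold and another above it, minimizing training errors."""
    best = (len(values) + 1, None, None, None)  # (errors, t, below_label, above_label)
    classes = sorted(set(labels))
    for t in sorted(set(values)):
        below = [y for v, y in zip(values, labels) if v <= t]
        above = [y for v, y in zip(values, labels) if v > t]
        for lo in classes:
            for hi in classes:
                errors = (sum(y != lo for y in below)
                          + sum(y != hi for y in above))
                if errors < best[0]:
                    best = (errors, t, lo, hi)
    _, t, lo, hi = best
    return lambda v: lo if v <= t else hi
```

A full tree recursively applies such splits across all three predictors (support height, support span, conductor spacing); the stump shows why a geometric feature visible in aerial imagery can carry voltage-class information at all.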

Copyright©北京勤云科技发展有限公司  京ICP备09084417号